In this paper, we propose to leverage a distinctive characteristic of dialogues, namely that participants share commonsense knowledge, to address the difficulty of summarizing them. We present SICK, a framework that uses commonsense inferences as additional context. In contrast to previous work that relies solely on the input dialogue, SICK uses an external knowledge model to generate a rich set of commonsense inferences and selects the most probable one with a similarity-based selection method. Built upon SICK, SICK++ uses commonsense as supervision, adding the task of generating commonsense inferences on top of dialogue summarization in a multi-task learning setting. Experimental results show that, with injected commonsense knowledge, our framework produces more informative and consistent summaries than existing methods.
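As a rough illustration of the similarity-based selection step described above (not the authors' released code), the sketch below scores candidate commonsense inferences against the dialogue with TF-IDF cosine similarity and keeps the most similar one; the candidate inferences are placeholders standing in for the output of an external knowledge model.

```python
# Minimal sketch of similarity-based selection of a commonsense inference.
# Assumes candidate inferences were already produced by an external knowledge
# model; TF-IDF cosine similarity stands in for the paper's actual scorer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_inference(dialogue: str, candidates: list[str]) -> str:
    """Return the candidate inference most similar to the dialogue."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform([dialogue] + candidates)
    scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
    return candidates[scores.argmax()]

dialogue = "A: I missed the bus again. B: You should set an earlier alarm."
candidates = [
    "PersonA is frustrated about being late.",
    "PersonB wants to buy a car.",
    "PersonA enjoys walking to work.",
]
print(select_inference(dialogue, candidates))
```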
Commonsense reasoning systems should be able to generalize to diverse reasoning cases. However, most state-of-the-art approaches depend on expensive data annotation and overfit to a specific benchmark without learning how to perform general semantic reasoning. To overcome these drawbacks, zero-shot QA systems, which convert a commonsense knowledge graph (KG) into synthetic QA-form samples for model training, have shown promise as a robust learning scheme. Considering the growing variety of commonsense KG types, this paper aims to extend the zero-shot transfer learning scenario to a multi-source setting in which different KGs can be utilized synergistically. To achieve this goal, we propose a modular variant of knowledge aggregation as a new zero-shot commonsense reasoning framework, designed to mitigate the loss of knowledge caused by interference between different knowledge sources. Results on five commonsense reasoning benchmarks demonstrate the efficacy of our framework, which improves performance when multiple KGs are used.
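A toy illustration of the general KG-to-QA conversion idea (not the paper's exact procedure): each (head, relation, tail) triple is verbalized into a question whose answer is the tail, while tails from unrelated triples serve as distractors. The relation templates and triples here are invented for demonstration.

```python
# Toy sketch: turn commonsense KG triples into synthetic multiple-choice
# QA samples, with tails of other triples used as distractor options.
import random

# Hypothetical relation templates; real KGs (ConceptNet, ATOMIC, ...) differ.
TEMPLATES = {
    "UsedFor": "What is {head} used for?",
    "CausesDesire": "What does {head} make someone want?",
}

triples = [
    ("umbrella", "UsedFor", "staying dry in the rain"),
    ("oven", "UsedFor", "baking food"),
    ("hunger", "CausesDesire", "eating a meal"),
]

def triples_to_qa(triples, num_distractors=2, seed=0):
    rng = random.Random(seed)
    samples = []
    for head, rel, tail in triples:
        distractor_pool = [t for _, _, t in triples if t != tail]
        options = rng.sample(distractor_pool, num_distractors) + [tail]
        rng.shuffle(options)
        samples.append({
            "question": TEMPLATES[rel].format(head=head),
            "options": options,
            "answer": tail,
        })
    return samples

for qa in triples_to_qa(triples):
    print(qa)
```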
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
In this paper, we investigate strong lottery tickets in generative models: subnetworks that achieve good generative performance without any weight update. Neural network pruning is considered the main cornerstone of model compression for reducing the costs of computation and memory. Unfortunately, pruning a generative model has not been extensively explored, and all existing pruning algorithms suffer from excessive weight-training costs, performance degradation, limited generalizability, or complicated training. To address these problems, we propose to find a strong lottery ticket via moment-matching scores. Our experimental results show that the discovered subnetwork can perform similarly to or better than the trained dense model even when only 10% of the weights remain. To the best of our knowledge, we are the first to show the existence of strong lottery tickets in generative models and to provide an algorithm to find them stably. Our code and supplementary materials are publicly available.
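The abstract does not spell out the moment-matching score, but the underlying "strong lottery ticket" mechanic of keeping an untrained subnetwork can be sketched as masking all but the top-scoring fraction of weights in a frozen layer; the scores below are placeholders for the paper's actual criterion.

```python
# Sketch of extracting a subnetwork from a frozen (untrained) layer by
# keeping only the top 10% of weights according to a per-weight score.
# The score tensor is a placeholder for the paper's moment-matching score.
import torch

def top_k_mask(scores: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    """Binary mask keeping the `keep_ratio` highest-scoring weights."""
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = scores.flatten().topk(k).values.min()
    return (scores >= threshold).float()

torch.manual_seed(0)
weight = torch.randn(64, 128)           # frozen, never updated
scores = torch.rand_like(weight)        # placeholder importance scores
mask = top_k_mask(scores, keep_ratio=0.1)
sparse_weight = weight * mask           # the surviving subnetwork's weights
print(f"kept {int(mask.sum())} of {mask.numel()} weights")
```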
This paper presents a solution to the Weather4cast 2022 Challenge Stage 2. The goal of the challenge is to forecast future high-resolution rainfall events obtained from ground radar using low-resolution multiband satellite images. We suggest a solution that performs data preprocessing appropriate to the challenge and then predicts rainfall movies using a novel RainUNet. RainUNet is a hierarchical U-shaped network with temporal-wise separable block (TS block) using a decoupled large kernel 3D convolution to improve the prediction performance. Various evaluation metrics show that our solution is effective compared to the baseline method. The source codes are available at https://github.com/jinyxp/Weather4cast-2022
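The exact TS block is defined in the linked repository, not in the abstract; as a hedged sketch of the general idea of decoupling a large 3D kernel, the block below applies a depthwise spatial (1, k, k) convolution followed by a depthwise temporal (k, 1, 1) convolution and a pointwise mix. Layer sizes and activation are assumptions.

```python
# Hedged sketch of a temporal-wise separable 3D block: a large kernel is
# decoupled into a spatial (1, k, k) conv and a temporal (k, 1, 1) conv.
# Illustrates the idea in the abstract, not the exact RainUNet implementation.
import torch
import torch.nn as nn

class TSBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        pad = kernel_size // 2
        self.spatial = nn.Conv3d(channels, channels,
                                 kernel_size=(1, kernel_size, kernel_size),
                                 padding=(0, pad, pad), groups=channels)
        self.temporal = nn.Conv3d(channels, channels,
                                  kernel_size=(kernel_size, 1, 1),
                                  padding=(pad, 0, 0), groups=channels)
        self.pointwise = nn.Conv3d(channels, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x):  # x: (batch, channels, time, height, width)
        return self.act(self.pointwise(self.temporal(self.spatial(x))))

x = torch.randn(1, 16, 8, 32, 32)
print(TSBlock(16)(x).shape)  # torch.Size([1, 16, 8, 32, 32])
```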
Question Answering (QA) is a task that entails reasoning over natural language contexts, and many relevant works augment language models (LMs) with graph neural networks (GNNs) to encode the Knowledge Graph (KG) information. However, most existing GNN-based modules for QA do not take advantage of rich relational information of KGs and depend on limited information interaction between the LM and the KG. To address these issues, we propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations in a unified manner. Specifically, QAT constructs Meta-Path tokens, which learn relation-centric embeddings based on diverse structural and semantic relations. Then, our Relation-Aware Self-Attention module comprehensively integrates different modalities via the Cross-Modal Relative Position Bias, which guides information exchange between relevant entities of different modalities. We validate the effectiveness of QAT on commonsense question answering datasets like CommonsenseQA and OpenBookQA, and on a medical question answering dataset, MedQA-USMLE. On all the datasets, our method achieves state-of-the-art performance. Our code is available at http://github.com/mlvlab/QAT.
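As a simplified, assumption-laden sketch of the flavor of a relation-dependent attention bias (QAT's actual Relation-Aware Self-Attention and Meta-Path tokens are more involved), the code below adds a learned scalar bias, indexed by a token-pair relation type, to the attention logits before the softmax.

```python
# Simplified sketch: self-attention whose logits receive a learned bias
# looked up from a relation-type id for every token pair. Illustrative only;
# it mimics the idea of a cross-modal relative position bias, not QAT itself.
import torch
import torch.nn as nn

class RelationBiasedAttention(nn.Module):
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.bias = nn.Embedding(num_relations, 1)  # one scalar bias per relation type
        self.scale = dim ** -0.5

    def forward(self, x, relation_ids):
        # x: (batch, seq, dim); relation_ids: (batch, seq, seq) integer relation types
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = torch.einsum("bqd,bkd->bqk", q, k) * self.scale
        logits = logits + self.bias(relation_ids).squeeze(-1)
        attn = logits.softmax(dim=-1)
        return torch.einsum("bqk,bkd->bqd", attn, v)

x = torch.randn(2, 5, 32)
rel = torch.randint(0, 4, (2, 5, 5))
print(RelationBiasedAttention(32, num_relations=4)(x, rel).shape)
```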
Recent scene graph generation (SGG) frameworks have focused on learning complex relationships among multiple objects in an image. Thanks to the nature of the message passing neural network (MPNN) that models high-order interactions between objects and their neighboring objects, they are dominant representation learning modules for SGG. However, existing MPNN-based frameworks assume the scene graph as a homogeneous graph, which restricts the context-awareness of visual relations between objects. That is, they overlook the fact that the relations tend to be highly dependent on the objects with which the relations are associated. In this paper, we propose an unbiased heterogeneous scene graph generation (HetSGG) framework that captures relation-aware context using message passing neural networks. We devise a novel message passing layer, called relation-aware message passing neural network (RMP), that aggregates the contextual information of an image considering the predicate type between objects. Our extensive evaluations demonstrate that HetSGG outperforms state-of-the-art methods, especially outperforming on tail predicate classes.
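A bare-bones sketch of predicate-type-aware message passing, shown only to make the general idea concrete (the actual RMP layer in HetSGG is more elaborate): each neighbor's features are transformed by a weight matrix chosen by the edge's predicate type before aggregation.

```python
# Bare-bones sketch of relation-aware message passing: neighbor features are
# transformed by a per-predicate-type linear map, then mean-aggregated.
import torch
import torch.nn as nn

class RelationAwareMP(nn.Module):
    def __init__(self, dim: int, num_predicate_types: int):
        super().__init__()
        self.rel_linears = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_predicate_types)
        )

    def forward(self, node_feats, edges, edge_types):
        # node_feats: (num_nodes, dim); edges: list of (src, dst); edge_types: list of ints
        agg = torch.zeros_like(node_feats)
        count = torch.zeros(node_feats.size(0), 1)
        for (src, dst), t in zip(edges, edge_types):
            agg[dst] += self.rel_linears[t](node_feats[src])
            count[dst] += 1
        return node_feats + agg / count.clamp(min=1)

feats = torch.randn(4, 16)
edges = [(0, 1), (2, 1), (3, 2)]
edge_types = [0, 2, 1]
print(RelationAwareMP(16, num_predicate_types=3)(feats, edges, edge_types).shape)
```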
When developing deep learning models, we usually decide what task we want to solve then search for a model that generalizes well on the task. An intriguing question would be: what if, instead of fixing the task and searching in the model space, we fix the model and search in the task space? Can we find tasks that the model generalizes on? How do they look, or do they indicate anything? These are the questions we address in this paper. We propose a task discovery framework that automatically finds examples of such tasks via optimizing a generalization-based quantity called agreement score. We demonstrate that one set of images can give rise to many tasks on which neural networks generalize well. These tasks are a reflection of the inductive biases of the learning framework and the statistical patterns present in the data, thus they can make a useful tool for analysing the neural networks and their biases. As an example, we show that the discovered tasks can be used to automatically create adversarial train-test splits which make a model fail at test time, without changing the pixels or labels, but by only selecting how the datapoints should be split between the train and test sets. We end with a discussion on human-interpretability of the discovered tasks.
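A minimal sketch of the agreement-score idea under simplifying assumptions (synthetic data and a fixed candidate labeling rather than the paper's search over tasks): train two identically configured networks with different seeds on the same labeling and measure how often their predictions agree on held-out points.

```python
# Minimal sketch of an agreement score: two identically configured networks
# are trained on the same candidate task (labeling) with different seeds,
# and agreement is the fraction of held-out points they classify identically.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=400) > 0).astype(int)  # one candidate "task"

X_tr, X_te, y_tr, _ = train_test_split(X, y, test_size=0.5, random_state=0)

def agreement_score(X_tr, y_tr, X_te, seeds=(1, 2)):
    preds = []
    for s in seeds:
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=s)
        preds.append(clf.fit(X_tr, y_tr).predict(X_te))
    return float(np.mean(preds[0] == preds[1]))

print(f"agreement score: {agreement_score(X_tr, y_tr, X_te):.3f}")
```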
This paper analyzes dysarthric speech datasets from three languages with different prosodic systems: English, Korean, and Tamil. We examine 39 acoustic measurements that reflect three speech dimensions: voice quality, pronunciation, and prosody. As a multilingual analysis, the mean values of the acoustic measurements are examined by intelligibility level. In addition, automatic intelligibility classification is performed to identify the optimal features for each language setting. The analysis shows that pronunciation features, such as the percentage of correct consonants, the percentage of correct vowels, and the proportion of correct phonemes, are language-independent measurements. However, voice quality and prosody features generally exhibit different patterns across languages. Experimental results also indicate that different speech dimensions matter more for different languages: prosody for English, pronunciation for Korean, and both prosody and pronunciation for Tamil. This paper contributes to speech pathology by distinguishing language-independent and language-dependent measurements for intelligibility classification of English, Korean, and Tamil dysarthric speech.
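To make the pronunciation measures above concrete, here is a small, hedged sketch of a "percentage of correct phonemes"-style score computed by aligning a produced phoneme sequence against a target one; clinical scoring protocols (PCC, PVC) follow stricter transcription rules than this.

```python
# Rough sketch of a "percentage of correct phonemes"-style measure:
# align produced phonemes against the target sequence and count matches.
from difflib import SequenceMatcher

def percent_phonemes_correct(target: list[str], produced: list[str]) -> float:
    matcher = SequenceMatcher(a=target, b=produced)
    correct = sum(block.size for block in matcher.get_matching_blocks())
    return 100.0 * correct / len(target)

target   = ["k", "ae", "t", "s", "ih", "t"]   # "cat sit"
produced = ["k", "ae", "d", "ih", "t"]        # substitution + deletion
print(f"{percent_phonemes_correct(target, produced):.1f}% phonemes correct")
```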
This paper proposes a cross-lingual classification method for English, Korean, and Tamil, which employs both language-independent and language-unique features. First, we extract 39 features from various speech dimensions such as voice quality, pronunciation, and prosody. Second, feature selection is applied to determine the optimal feature set for each language. A set of shared features and a set of distinct features are identified by comparing the feature selection results of the three languages. Finally, automatic severity classification is performed using the two feature sets. Notably, the proposed method excludes the features that differ across languages to prevent the negative effect of features unique to other languages. Accordingly, the extreme gradient boosting (XGBoost) algorithm is employed for classification owing to its strength in handling missing data. To validate the effectiveness of the proposed method, two baseline experiments are conducted: one using the intersection of the monolingual feature sets (Intersection) and one using their union (Union). According to the experimental results, our method achieves better performance, with an F1 score of 67.14%, compared to 64.52% for the Intersection experiment and 66.74% for the Union experiment. Furthermore, the proposed method outperforms monolingual classification for all three languages, achieving relative improvements of 17.67%, 2.28%, and 7.79% for English, Korean, and Tamil, respectively. The results indicate that commonly shared features and language-specific features must be considered separately for cross-language dysarthria severity classification.
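As a hedged illustration of why XGBoost suits the missing-feature setup described above (the data and feature layout are synthetic, not the paper's datasets), the sketch below trains an XGBClassifier on a feature matrix in which language-unique columns are simply left as NaN for the languages that lack them; XGBoost handles such missing entries natively.

```python
# Illustrative sketch: XGBoost natively handles NaN entries, so features that
# exist only for some languages can be left missing for the others.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 300
shared = rng.normal(size=(n, 5))              # language-independent features
unique = rng.normal(size=(n, 3))              # language-unique features
lang = rng.integers(0, 3, size=n)             # 0: English, 1: Korean, 2: Tamil
unique[lang != 0] = np.nan                    # only "English" rows have them here
X = np.hstack([shared, unique])
y = (shared[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # synthetic severity label

clf = XGBClassifier(n_estimators=100, max_depth=3)
clf.fit(X[:200], y[:200])
print("accuracy:", (clf.predict(X[200:]) == y[200:]).mean())
```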